Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This work develops a new exact algorithm for structure learning of chordal Markov networks under decomposable scoring functions. The algorithm takes a dynamic programming approach based on recursive partition trees, structures equivalent to junction trees that are well suited to decomposing the problem into smaller instances, thereby enabling dynamic programming. The authors review the literature, prove the correctness of their algorithm, and compare it against a modified version of GOBNILP, which implements a state-of-the-art method for exact Bayesian network structure learning. The paper is well written, relevant for NIPS, and technically sound.
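To make the role of decomposability concrete: because the total score is a sum of local (per-clique) scores, optimal solutions to subproblems over vertex subsets can be memoized and reused. The toy sketch below illustrates this general subset-DP idea on the simpler problem of partitioning a vertex set into blocks maximizing a sum of block scores; it is only an illustrative sketch, not the paper's recursive-partition-tree algorithm, and `clique_score` is a hypothetical user-supplied local score.

```python
from functools import lru_cache

def best_partition_score(vertices, clique_score):
    """Toy subset DP: split a vertex set into disjoint blocks so that the
    sum of per-block scores is maximal. Illustrates how a decomposable
    score enables dynamic programming over vertex subsets; this is NOT
    the paper's algorithm over recursive partition trees."""
    n = len(vertices)
    full = (1 << n) - 1  # bitmask representing the whole vertex set

    @lru_cache(maxsize=None)
    def best(mask):
        if mask == 0:
            return 0.0
        # Fix the lowest set bit: every block we try must contain it,
        # so each partition is enumerated exactly once.
        low = mask & -mask
        sub = mask
        best_val = float("-inf")
        while sub:
            if sub & low:
                block = frozenset(vertices[i] for i in range(n) if sub >> i & 1)
                # decomposability: block score + best score on the rest
                best_val = max(best_val, clique_score(block) + best(mask ^ sub))
            sub = (sub - 1) & mask  # next non-empty submask of mask
        return best_val

    return best(full)
```

For example, with `clique_score = lambda s: len(s) ** 2`, `best_partition_score(['a', 'b', 'c'], clique_score)` returns 9.0, since keeping all three vertices in one block beats any split. The real algorithm must additionally ensure the chosen cliques assemble into a valid clique tree, which is what the recursive partition trees encode.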




Learning Chordal Markov Networks by Dynamic Programming

Kangas, Kustaa, Koivisto, Mikko, Niinimäki, Teppo

Neural Information Processing Systems

We present an algorithm for finding a chordal Markov network that maximizes any given decomposable scoring function. The algorithm is based on a recursive characterization of clique trees, and it runs in O(4^n) time for n vertices. On an eight-vertex benchmark instance, our implementation turns out to be about ten million times faster than a recently proposed constraint-satisfaction-based algorithm (Corander et al., NIPS 2013). Within a few hours, it is able to solve instances up to 18 vertices, and beyond if we restrict the maximum clique size. We also study the performance of a recent integer linear programming algorithm (Bartlett and Cussens, UAI 2013). Our results suggest that, unless we bound the clique sizes, currently only the dynamic programming algorithm is guaranteed to solve instances with around 15 or more vertices.